Reducing Web Latency Using Reference Point Caching

Authors

  • Girish P. Chandranmenon
  • George Varghese

(Work done while at Washington University, St. Louis.)
Abstract

To reduce web access latencies, we propose a new paradigm for caching at the reference point of a document. If a document B is referred to from a document A, information about B is cached at A to reduce the latency of client accesses to B. We focus on two specific instances of this paradigm: caching IP addresses to avoid DNS lookups at clients, and caching information about documents to avoid setting up new connections. Avoiding DNS lookup saves over 4 seconds in 10-12% of accesses, and avoiding connection setup saves 240ms on average. These ideas enable new services such as search engines that return IP addresses to speed up search sessions, and caching at regional information servers that goes beyond the capabilities of today's proxy caching.

Keywords: WWW, Latency, Precomputing

I. BANDWIDTH VS LATENCY

The bandwidth of a transmission technology is the number of bits the technology can carry per second, while the latency is the time it takes to transfer one bit between the transmission endpoints. Improvements in network technology have come primarily in bandwidth rather than latency. In fact, improvements aimed at higher bandwidth have at times increased latency; examples include compression technology used in modems, which gains bandwidth at the cost of waiting for more bytes from the application, and the use of pipelines in routers and end-node adaptors. As network technology becomes less dominated by bandwidth limitations, the number of round-trip times spent on protocol handshakes will become a dominant component of the overall transfer time. We use four observations to support this claim.

(1) Web traffic dominates current Internet traffic, and most documents accessed are fairly small; for example, the study of file access patterns conducted by SPEC [1] shows that 50% of accessed files are 5 Kbytes or less.

(2) Even for large files, round-trip delays will dominate if the bandwidth is high; for example, a 1 Mbyte file transfer across the continental USA requires 36ms of speed-of-light latency plus only 8.4ms of transfer time at 1 Gb/s (a short sketch below works through this arithmetic).

(3) Real round-trip times are much worse than speed-of-light calculations. Crovella and Carter [2] report that round-trip latencies, measured from a fixed host on their network at Boston University to 5262 random servers, have a median of 125ms and a mean of 241ms. Cheshire [3] further reports that latencies through current modems are very high (110ms and up, some even 300ms), explained in part by the need to copy bytes to a serial port and by the temporary buffering imposed by modem compression algorithms. This matters because the majority of Internet access is from a PC through modem banks.

(4) Studies of web caching [4], [5] report that web cache hit rates rarely, if ever, go beyond 70%.

Cheshire [3] argues that an access latency of around 100ms is necessary for applications to have an interactive feel. Even a target latency of around 200ms allows only 3 coast-to-coast round-trip delays. Thus even in a perfect world, where bandwidth is cheap and the clients and servers are infinitely fast, users may still see large access latencies that limit their productivity. We propose a general paradigm, reference point caching, for reducing RTTs by caching precomputed information about documents at the points where the documents are referenced. Within this paradigm, we propose two specific mechanisms: reference point caching of IP addresses and reference point caching of documents.
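Observation (2) above rests on simple arithmetic: total transfer time is propagation latency plus serialization time. The following is a minimal sketch, not from the paper, assuming 1 Mbyte means 2^20 bytes and taking the 36ms coast-to-coast figure quoted above as given:

    # Sketch: transfer time = propagation latency + serialization time.
    # The 36 ms coast-to-coast figure is the paper's; "1 Mbyte" is
    # assumed to mean 2**20 bytes.

    FILE_BITS = 2**20 * 8      # 1 Mbyte expressed in bits
    PROPAGATION_S = 0.036      # coast-to-coast speed-of-light latency

    def transfer_time(bandwidth_bps: float) -> float:
        """Seconds to move FILE_BITS over a link, ignoring protocol handshakes."""
        return PROPAGATION_S + FILE_BITS / bandwidth_bps

    for bw in (1e6, 10e6, 100e6, 1e9, 10e9):
        t = transfer_time(bw)
        print(f"{bw / 1e6:8.0f} Mb/s: {t * 1000:8.1f} ms "
              f"(latency share {PROPAGATION_S / t:6.1%})")

At 1 Gb/s this reproduces the 36ms + 8.4ms split from observation (2), with propagation already about 80% of the total; raising bandwidth further barely changes the result, which is the point of the observation.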
Caching of IP addresses reduces the latencies associated with DNS lookup; we show through measurements that avoiding DNS lookup saves 100-300ms on average, and often on the order of seconds. Reference point caching of documents generalizes the scope of web caching by allowing documents to be cached at any reference point instead of only on the path between the client and the server. It avoids connection setup, and our measurements indicate that this can save 240ms on average. Since a target for interactive response is around 200ms, these are significant savings.

The rest of the paper is organized as follows. In Section II we use measurements to quantify the major latency components of a web access. We use these measurements to motivate our proposed reference point caching paradigm in Section III. We then describe and evaluate two specific instantiations of reference point caching, namely caching IP addresses (Section IV) and caching documents (Section V). With each mechanism, we also discuss a few policies that can be used at the server and at the client.

II. WEB LATENCY COMPONENTS
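Section II quantifies these components by measurement. As a rough, illustrative counterpart (not the paper's methodology), the two components that the proposed mechanisms avoid can be sampled with Python's standard socket library; example.com is a placeholder host, and a warm local resolver cache can make the DNS number optimistic:

    import socket
    import time

    HOST, PORT = "example.com", 80   # placeholder target, not a server measured in the paper

    # Time the DNS lookup that reference point caching of IP addresses avoids.
    t0 = time.perf_counter()
    addr = socket.getaddrinfo(HOST, PORT, family=socket.AF_INET,
                              proto=socket.IPPROTO_TCP)[0][4]
    dns_ms = (time.perf_counter() - t0) * 1000

    # Time the TCP handshake that reference point caching of documents avoids;
    # connecting to the already-resolved address keeps DNS out of this number.
    t1 = time.perf_counter()
    sock = socket.create_connection(addr, timeout=5)
    setup_ms = (time.perf_counter() - t1) * 1000
    sock.close()

    print(f"DNS lookup:       {dns_ms:7.1f} ms ({addr[0]})")
    print(f"Connection setup: {setup_ms:7.1f} ms")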

Similar resources

Web Access Latency Reduction Using CRF-Based Predictive Caching

Reducing the Web access latency perceived by a Web user has become a problem of interest. Web prefetching and caching are two effective techniques that can be used together to reduce the access latency problem on the Internet. Because the success of Web prefetching mainly relies on the prediction accuracy of prediction methods, in this paper we employ a powerful sequential learning model, Condi...


Exploring the Bounds of Web Latency Reduction from Caching and Prefetching

Prefetching and caching are techniques commonly used in I/O systems to reduce latency. Many researchers have advocated the use of caching and prefetching to reduce latency in the Web. We derive several bounds on the performance improvements seen from these techniques, and then use traces of Web proxy activity taken at Digital Equipment Corporation to quantify these bounds. We found that for the...


DynCoDe : An Architecture for Transparent Dynamic Content Delivery

Delivery of web content is increasingly using dynamic and personalized content. Caching has been extensively studied for reducing the client latency and bandwidth requirements for static content. There has been recent interest in schemes to exploit locality in dynamic web content [1, 2]. We propose a novel scheme that integrates the distribution and caching of personalized content which rely he...


Efficient Data Distribution in a Web Server Farm

High-performance Web sites rely on Web server “farms”—hundreds of computers serving the same content—for scalability, reliability, and low-latency access to Internet content. Deploying these scalable farms typically requires the power of distributed or clustered file systems. Building Web server farms on file systems complements hierarchical proxy caching. Proxy caching replicates Web content ...



Publication date: 2001